Crossover phenomenon in adversarial attacks on voter model

Authors

Abstract

A recent study (Chiyomaru and Takemoto 2022 Phys. Rev. E 106 014301) considered adversarial attacks conducted to distort voter model dynamics in networks. This method intervenes in the interaction patterns of individuals and induces them toward a target opinion state through small perturbations $\epsilon$. In this study, we investigate adversarial attacks on voter dynamics in random networks of finite size $n$. The exit probability $P_{+1}$ to reach the target absorbing state and the mean time $\tau_n$ to reach consensus are analyzed in the mean-field approximation. Given $\epsilon > 0$, the exit probability $P_{+1}$ converges asymptotically to unity as $n$ increases. The mean time to consensus $\tau_n$ scales as $(\ln \epsilon n)/\epsilon$ for homogeneous networks with a large mean degree. By contrast, it scales as $(\ln(\epsilon\mu_1^2 n/\mu_2))/\epsilon$ for heterogeneous networks, where $\mu_1$ and $\mu_2$ represent the first and second moments of the degree distribution, respectively. Moreover, we observe a crossover phenomenon of $\tau_n$ from a linear to a logarithmic scale and find $n_{\mathrm{co}} \sim \epsilon^{-1/\alpha}$, above which all nodes reach the target state in a time logarithmic in $n$. Here, $\alpha = (\gamma-1)/2$ for scale-free networks with degree exponent $2 < \gamma < 3$.
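
The attack mechanism is only summarized in the abstract; the sketch below illustrates one plausible reading of it, assuming the perturbation $\epsilon$ acts as a small per-update probability that a node adopts the target opinion $+1$ instead of copying a random neighbor. The code and its function name run_voter are illustrative and not taken from the paper; they only let one check qualitatively that $P_{+1}$ approaches 1 and that the consensus time grows slowly with $n$ when $\epsilon > 0$.

```python
# Minimal simulation sketch (hypothetical, not the authors' code): voter dynamics
# with an adversarial bias epsilon on an Erdos-Renyi network. Assumption: with
# probability epsilon the updating node adopts the target opinion +1; otherwise
# it copies a randomly chosen neighbor (the ordinary voter update).
import random

import networkx as nx


def run_voter(n=2000, mean_degree=10, eps=0.02, seed=0):
    """Return (consensus_opinion, consensus_time_in_sweeps)."""
    rng = random.Random(seed)
    g = nx.fast_gnp_random_graph(n, mean_degree / n, seed=seed)
    nodes = list(g.nodes())
    # Unbiased initial condition: each node is +1 or -1 with equal probability.
    state = {v: (1 if rng.random() < 0.5 else -1) for v in nodes}
    n_plus = sum(1 for s in state.values() if s == 1)
    steps = 0
    while 0 < n_plus < n:  # run until every node holds the same opinion
        v = rng.choice(nodes)
        old = state[v]
        if rng.random() < eps:
            new = 1  # adversarial push toward the target opinion
        else:
            nbrs = list(g.neighbors(v))
            new = state[rng.choice(nbrs)] if nbrs else old  # ordinary voter update
        state[v] = new
        n_plus += (new - old) // 2  # +1 if -1 -> +1, -1 if +1 -> -1, else 0
        steps += 1
    return (1 if n_plus == n else -1), steps / n  # time measured in sweeps


if __name__ == "__main__":
    # With eps > 0 the exit probability P_+1 should approach 1 as n grows, and
    # the consensus time should scale roughly like ln(eps * n) / eps sweeps.
    runs = [run_voter(seed=s) for s in range(10)]
    p_plus = sum(1 for opinion, _ in runs if opinion == 1) / len(runs)
    mean_time = sum(t for _, t in runs) / len(runs)
    print(f"P_+1 ~ {p_plus:.2f}, mean consensus time ~ {mean_time:.1f} sweeps")
```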

Similar articles

Adversarial Attacks on Image Recognition

The purpose of this project is to extend the work done by Papernot et al. in [4] on adversarial attacks in image recognition. We investigated whether a reduction in feature dimensionality can maintain a comparable level of misclassification success while increasing computational efficiency. We formed an attack on a black-box model with an unknown training set by forcing the oracle to misclassif...

LatentPoison - Adversarial Attacks On The Latent Space

Robustness and security of machine learning (ML) systems are intertwined, wherein a non-robust ML system (classifiers, regressors, etc.) can be subject to attacks using a wide variety of exploits. With the advent of scalable deep learning methodologies, a lot of emphasis has been put on the robustness of supervised, unsupervised and reinforcement learning algorithms. Here, we study the robustne...

Delving into adversarial attacks on deep policies

Adversarial examples have been shown to exist for a variety of deep learning architectures. Deep reinforcement learning has shown promising results on training agent policies directly on raw inputs such as image pixels. In this paper we present a novel study into adversarial attacks on deep reinforcement learning policies. We compare the effectiveness of the attacks using adversarial examples vs...

Adversarial Attacks on Neural Network Policies

Machine learning classifiers are known to be vulnerable to inputs maliciously constructed by adversaries to force misclassification. Such adversarial examples have been extensively studied in the context of computer vision applications. In this work, we show adversarial attacks are also effective when targeting neural network policies in reinforcement learning. Specifically, we show existing ad...

Shielding Google's language toxicity model against adversarial attacks

Background. Lack of moderation in online communities enables participants to engage in personal aggression, harassment or cyberbullying, issues that have been accentuated by extremist radicalisation in the contemporary post-truth politics scenario. This kind of hostility is usually expressed by means of toxic language, profanity or abusive statements. Recently, Google has developed a machine-lear...

Journal

Journal title: Journal of Physics

Year: 2023

ISSN: 0022-3700, 1747-3721, 0368-3508, 1747-3713

DOI: https://doi.org/10.1088/2632-072x/acf90b